
    What does not happen: quantifying embodied engagement using NIMI and self-adaptors

    Previous research into the quantification of embodied intellectual and emotional engagement using non-verbal movement parameters has not yielded consistent results across different studies. Our research introduces NIMI (Non-Instrumental Movement Inhibition) as an alternative parameter. We propose that the absence of certain types of possible movements can be a more holistic proxy for cognitive engagement with media (in seated persons) than searching for the presence of other movements. Rather than analyzing total movement as an indicator of engagement, our research team distinguishes between instrumental movements (i.e. physical movement serving a direct purpose in the given situation) and non-instrumental movements, and investigates them in the context of the narrative rhythm of the stimulus. We demonstrate that NIMI occurs by showing that viewers’ movement levels entrain (i.e. synchronise) to the repeating narrative rhythm of a timed, computer-presented quiz. Finally, we discuss the role of objective metrics of engagement in future context-aware analysis of human behaviour in audience research, interactive media, and responsive system and interface design.
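
    A minimal illustration of the entrainment idea (a sketch only, not the paper's actual analysis pipeline; the movement measure, cycle length, and epoch-averaging approach are all assumptions) is to average a per-frame movement-level signal over the repeated cycles of the timed quiz, so that movement rising and falling in step with the narrative rhythm shows up as a consistent profile in the cycle average:

    import numpy as np

    def cycle_averaged_movement(movement, cycle_len):
        """Average a movement-level time series across repeated quiz cycles.

        movement  : 1-D array of per-frame movement levels (e.g. frame-difference magnitude)
        cycle_len : samples per quiz cycle, assumed known from the stimulus timing
        """
        n_cycles = len(movement) // cycle_len
        epochs = np.asarray(movement[:n_cycles * cycle_len], dtype=float)
        epochs = epochs.reshape(n_cycles, cycle_len)
        # A repeatable rise and fall in this profile (relative to, say, a shuffled
        # baseline) would indicate entrainment to the narrative rhythm.
        return epochs.mean(axis=0)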

    A time series feature of variability to detect two types of boredom from motion capture of the head and shoulders

    Boredom and disengagement metrics are crucial to the correctly timed implementation of adaptive interventions in interactive systems. Psychological research suggests that boredom (which other HCI teams have been able to partially quantify with pressure-sensing chair mats) is actually a composite of two states: lethargy and restlessness. Here we present an innovative approach to the measurement and recognition of these two kinds of boredom, based on motion capture and video analysis of changes in head and shoulder positions. Discrete, three-minute, computer-presented stimuli (games, quizzes, films and music) covering a spectrum from engaging to boring/disengaging were used to elicit changes in cognitive/emotional states in seated, healthy volunteers. Interaction with the stimuli occurred via a handheld trackball instead of a mouse, so head and shoulder movements were assumed to be non-instrumental. Our results include a feature (standard deviation of windowed ranges) that may be more specific to boredom than mean speed of head movement, and that could be implemented in computer vision algorithms for disengagement detection.
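
    The named feature lends itself to a compact sketch (assumptions: a 1-D head-position trace sampled at a fixed rate and split into non-overlapping windows; window length and any preprocessing are not specified in the abstract):

    import numpy as np

    def std_of_windowed_ranges(positions, window_len):
        """Standard deviation of per-window ranges of a head-position trace.

        positions  : 1-D array of head (or shoulder) positions over time
        window_len : samples per non-overlapping window (an assumed parameter)
        """
        n_windows = len(positions) // window_len
        trimmed = np.asarray(positions[:n_windows * window_len], dtype=float)
        windows = trimmed.reshape(n_windows, window_len)
        ranges = windows.max(axis=1) - windows.min(axis=1)  # movement range per window
        # High variability of these ranges (bursts of restlessness alternating with
        # stillness) is the candidate marker, as opposed to mean movement speed.
        return ranges.std()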

    Experience design: video without faces increases engagement but not empathy

    Counter to prior claims that empathy is required for higher levels of engagement in human-computer interaction, our team has previously found that, in an analysis of 844 stimulus presentations, empathy is sufficient for high engagement but is not necessary. Here, we ran a carefully controlled study of human-computer interactions with musical stimuli (with and without visuals, and with and without recognizable people) to directly test whether we could design an engaging stimulus that did not elicit empathy, by avoiding human faces or personal interaction. We measured subjective responses by visual analogue scale and found that the faceless stimulus was as engaging as the face-containing stimulus, but much less empathy-provoking. Therefore, we propose that empathy and engagement be considered independently during interaction design, because they are not monotonically related.

    The complex relationship between empathy, engagement and boredom

    In human-computer interactions — especially gaming — empathy has been mooted as a necessary prerequisite for higher levels of engagement and immersion. More recently, other forms of engagement, including intellectual/cognitive engagement, have been proposed. In this study we present a carefully controlled dataset of human-computer interactions with stimuli ranging from highly engaging to boring, collected to test these two theories. Analyzing 844 response sets to visual analogue scales (VAS) for empathy, interest, boredom, and engagement, we found that high empathy was sufficient for high engagement but was not necessary, whilst the converse was not true. We also found that empathy and boredom were incompatible with each other, but low levels of either were permissive rather than causal to the other. We conclude that there is no monotonic relationship between increasing empathy and engagement; either empathy is a sufficient (but not necessary) cause of engagement, or engagement is a necessary precursor to high empathy.

    Spelling errors and shouting capitalization lead to additive penalties to trustworthiness of online health information: randomized experiment with laypersons

    Background: The written format and literacy competence of screen-based texts can interfere with the perceived trustworthiness of health information in online forums, independent of the semantic content. Unlike in professional content, the format in unmoderated forums can regularly hint at incivility, perceived as deliberate rudeness or casual disregard toward the reader, for example, through spelling errors and unnecessary emphatic capitalization of whole words (online shouting). Objective: This study aimed to quantify the comparative effects of spelling errors and inappropriate capitalization on ratings of trustworthiness independently of lay insight, and to determine whether these changes act synergistically or additively on the ratings. Methods: In web-based experiments, 301 UK-recruited participants rated 36 randomized short stimulus excerpts (in the format of information from an unmoderated health forum about multiple sclerosis) for trustworthiness using a semantic differential slider. A total of 9 control excerpts were compared with matching error-containing excerpts. Each matching error-containing excerpt included 5 instances of misspelling, 5 instances of inappropriate capitalization (shouting), or a combination of 5 misspelling and 5 inappropriate capitalization errors. Data were analyzed in a linear mixed effects model. Results: The mean trustworthiness ratings of the control excerpts ranged from 32.59 to 62.31 (rating scale 0-100). Compared with the control excerpts, excerpts containing only misspellings were rated as 8.86 points less trustworthy, those containing only inappropriate capitalization as 6.41 points less trustworthy, and those containing the combination of misspelling and capitalization as 14.33 points less trustworthy (P<.001 for all). Misspelling and inappropriate capitalization therefore showed an additive rather than synergistic effect. Conclusions: Distinct indicators of incivility independently and additively penalize the perceived trustworthiness of online text, independently of lay insight, eliciting a medium effect size.
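
    The reported point estimates make the additivity claim easy to check by hand (an arithmetic illustration only, using the penalties quoted above on the 0-100 rating scale):

    # Penalties (rating points) reported relative to the control excerpts.
    misspelling_penalty = 8.86
    capitalization_penalty = 6.41
    combined_observed = 14.33

    predicted_if_additive = misspelling_penalty + capitalization_penalty  # 15.27
    # 15.27 predicted vs 14.33 observed: the combined penalty is close to the sum of
    # the individual penalties, consistent with additive rather than synergistic effects.
    print(predicted_if_additive, combined_observed)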